Current Issue: July–September | Volume: 2019 | Issue Number: 3 | Articles: 5
The Cordoba Guitar Festival is one of the most important cultural events in Spain. This article analyses the musical preferences, satisfaction, attitudinal loyalty, and behavioural loyalty of spectators who attended the 36th festival, held in July 2016, as well as the festival's economic impact on the city. These characteristics of the public give rise to the four hypotheses of this study. To achieve this aim, a structural equation model (SEM) was used. The results...
Speaking is a way for humans to communicate with others using language. Speaking ability varies widely from speaker to speaker; in general, language skills improve as intelligence develops, so the analysis of a speaker's utterances is known to be a good tool for evaluating the speaker's intellectual maturity. Until recently, these evaluations were done manually, based on the experience of a handful of experts, but this approach is not only time-consuming and costly but also highly subjective. In this paper, we propose a Korean automatic speech analysis system based on Natural Language Processing (NLP) and a web service to solve this problem. For this study, we constructed a web service based on Django to respond to the requests of various users. When a user delivers a transcription file of utterances to the server via the web, the server analyzes the speech ability of the speaker based on various indicators: it compares the transcription file with the language-ability indicators of persons of the same age as the speaker and displays the result immediately to the user. In this study, we used KoNLPy, a Korean language-processing tool. The automatic speech analysis service analyzes not only the overall language ability of the speaker but also individual domains such as sentence-completion ability and vocabulary ability. In addition, a faster, immediate service was made possible without sacrificing accuracy compared to human analysis...
To rescue and preserve an endangered language, this paper studies an end-to-end speech recognition model based on sample transfer learning for the low-resource Tujia language. At the level of the Tujia language's international phonetic alphabet (IPA) labels, using a Chinese corpus as an extension of the Tujia corpus effectively solves the problem of insufficient Tujia data: a cross-language corpus and an IPA dictionary unified between Chinese and Tujia were constructed. A convolutional neural network (CNN) and a bi-directional long short-term memory (BiLSTM) network were used to extract cross-language acoustic features and to train hidden-layer weights shared between the Tujia and Chinese phonetic corpora. In addition, the automatic speech recognition function for Tujia was realized using an end-to-end method consisting of symmetric encoding and decoding. Furthermore, transfer learning was used to establish the cross-language end-to-end Tujia recognition model. The experimental results show that the recognition error rate of the proposed model is 46.19%, which is 2.11% lower than that of the model trained only on Tujia-language data. Therefore, this approach is feasible and effective...
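The recognition error rate quoted above is conventionally computed as the edit (Levenshtein) distance between the recognized label sequence and the reference sequence, divided by the reference length. A minimal sketch of that metric, with made-up example label sequences (the paper's actual IPA labels are not given in the abstract):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, via dynamic programming
    with a single rolling row."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return dp[n]

def error_rate(ref, hyp):
    """Error rate = edit distance / reference length, as a percentage."""
    return 100.0 * edit_distance(ref, hyp) / len(ref)

# Hypothetical label sequences: one substitution over 8 symbols.
ref = list("tshixaaa")
hyp = list("tshixaab")
print(f"{error_rate(ref, hyp):.2f}%")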
It is noteworthy that monitoring and understanding a human's emotional state plays a key role in current and forthcoming computational technologies. At the same time, this monitoring and analysis should be as unobtrusive as possible, since the digital world has been smoothly adopted into everyday activities. In this framework, and within the domain of assessing humans' affective state during educational training, the most popular approach is to use sensory equipment that allows observation without any kind of direct contact. Thus, in this work we focus on human emotion recognition from audio stimuli (i.e., human speech) using a novel approach based on a computer-vision-inspired methodology, namely the bag-of-visual-words method, applied to spectrograms of audio segments. A spectrogram is treated as the visual representation of the corresponding audio segment and may be analyzed with well-known traditional computer vision techniques, such as construction of a visual vocabulary, extraction of speeded-up robust features (SURF), quantization into a set of visual words, and image-histogram construction. As a last step, support vector machine (SVM) classifiers are trained on this information. Finally, to further generalize the proposed approach, we utilize publicly available datasets in several human languages to perform cross-language experiments, on both acted and real-life data...
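The core of the bag-of-visual-words step described above is quantizing each local feature to its nearest codebook entry and building a normalized histogram. A minimal sketch with tiny made-up data: in the paper, the features would be SURF descriptors extracted from spectrogram images and the codebook would come from clustering (e.g., k-means), neither of which is reproduced here.

```python
import numpy as np

def bovw_histogram(features, codebook):
    """Quantize local features (N x D) to their nearest visual word in the
    codebook (K x D) and return a normalized word histogram."""
    # Pairwise Euclidean distances between every feature and every codeword.
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)  # index of the nearest visual word
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()      # normalize so image size cancels out

# Toy 2-D stand-ins for SURF descriptors and a 3-word codebook.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
features = np.array([[0.1, 0.0], [0.9, 1.1], [0.0, 0.9], [1.0, 1.0]])
print(bovw_histogram(features, codebook))
```

The resulting fixed-length histogram is what would then be fed to the SVM classifiers, regardless of how many descriptors the original spectrogram produced.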
IoT devices now enrich people's lives. However, the security of IoT devices seldom attracts manufacturers' attention. There are already solutions, such as Smart Config from TI, for connecting a smart device to a user's wireless network over 802.11 transmission. However, Smart Config is insecure in many situations and does not achieve a satisfactory transmission speed; this is not because of a low bit rate, but because the device usually takes a long time to recognize and decode the data it receives. In this paper, we propose a new Wi-Fi connection method based on audio waves. The method is based on MFSK (multiple frequency-shift keying) and works well at short distances, which ensures correctness and efficiency. In addition, audio waves can hardly be eavesdropped on, which provides higher security than other methods. We also put forward an encryption solution using a jamming signal, which can greatly improve the security of the transmission...
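In MFSK, the sender splits the payload into small symbols and transmits one audio tone per symbol. A minimal sketch of the modulation side under assumed parameters: the four tone frequencies, symbol rate, and symbol width here are hypothetical choices for illustration, since the abstract does not specify the paper's actual values.

```python
import math

# Hypothetical 4-FSK parameters (not from the paper): 2 bits per symbol,
# four audible/near-ultrasonic tones, 50 ms per symbol.
SAMPLE_RATE = 44100
SYMBOL_DURATION = 0.05                                     # seconds
TONES = {0b00: 4000, 0b01: 5000, 0b10: 6000, 0b11: 7000}   # Hz

def bits_to_symbols(data: bytes):
    """Split each byte into four 2-bit MFSK symbols, most significant first."""
    for byte in data:
        for shift in (6, 4, 2, 0):
            yield (byte >> shift) & 0b11

def modulate(data: bytes):
    """Generate audio samples: one sine tone per 2-bit symbol."""
    samples = []
    n = int(SAMPLE_RATE * SYMBOL_DURATION)
    for sym in bits_to_symbols(data):
        freq = TONES[sym]
        samples.extend(math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
                       for t in range(n))
    return samples

signal = modulate(b"ssid:password")  # e.g., Wi-Fi credentials to transmit
print(len(signal), "samples")
```

The receiver would run the reverse mapping (tone detection, e.g. via an FFT per symbol window, back to 2-bit symbols); the paper's jamming-signal encryption would sit on top of this layer.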